KAFKA-10199: Consider tasks in state updater when computing offset sums #13925
Conversation
With the state updater, the task manager also needs to look into the tasks owned by the state updater when computing the sum of offsets of the state. This offset sum is used by the high availability assignor to assign warm-up replicas. If the task manager does not take tasks in the state updater into account, a warm-up replica will never report back that the state for the corresponding task has caught up. Consequently, the warm-up replica will never be dismissed and probing rebalances will never end.
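The effect described above can be sketched in a few lines. This is a hypothetical illustration (class, method, and variable names are made up, not the actual Kafka Streams internals): the offset sums reported to the assignor must cover both the tasks owned by the task manager and the tasks currently handed off to the state updater, otherwise a warm-up replica restoring in the state updater can never report that it has caught up.

```java
import java.util.HashMap;
import java.util.Map;

public class OffsetSumSketch {

    // Merge offset sums from both owners; without the putAll, restoring
    // warm-up tasks would be missing from the report to the assignor.
    static Map<String, Long> taskOffsetSums(final Map<String, Long> ownedTaskOffsets,
                                            final Map<String, Long> stateUpdaterTaskOffsets) {
        final Map<String, Long> sums = new HashMap<>(ownedTaskOffsets);
        sums.putAll(stateUpdaterTaskOffsets);
        return sums;
    }

    public static void main(final String[] args) {
        final Map<String, Long> owned = Map.of("0_0", 100L);
        // warm-up task that is still restoring inside the state updater
        final Map<String, Long> restoring = Map.of("0_1", 42L);
        final Map<String, Long> sums = taskOffsetSums(owned, restoring);
        System.out.println(sums.size());      // 2
        System.out.println(sums.get("0_1"));  // 42
    }
}
```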
lgtm, just one minor change to consider
```java
    }
} catch (final IOException e) {
    log.warn(String.format("Exception caught while trying to read checkpoint for task %s:", id), e);
    createdAndClosedTasks.add(task.id());
```
nit: if you want to do it with fewer collections, you could initialize `lockedTaskDirectoriesOfNonOwnedTasks` earlier, and just remove directly from that set in the `if` branch, instead of adding to `createdAndClosedTasks` in the `else` branch.
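The suggested refactoring could look roughly like the following simplified sketch (variable names mirror the PR, but the surrounding logic is reduced to the essentials and the helper is hypothetical): initialize the "non-owned" set up front and remove ids from it directly in the `if` branch, so no extra `createdAndClosedTasks` collection is needed.

```java
import java.util.HashSet;
import java.util.Set;

public class FewerCollectionsSketch {

    static Set<String> nonOwnedTasks(final Set<String> lockedTaskDirectories,
                                     final Set<String> ownedAndInitializedTasks) {
        // initialize lockedTaskDirectoriesOfNonOwnedTasks earlier ...
        final Set<String> lockedTaskDirectoriesOfNonOwnedTasks =
            new HashSet<>(lockedTaskDirectories);
        for (final String id : lockedTaskDirectories) {
            if (ownedAndInitializedTasks.contains(id)) {
                // ... and remove directly in the if-branch instead of
                // collecting created/closed tasks in a second set
                lockedTaskDirectoriesOfNonOwnedTasks.remove(id);
            }
        }
        return lockedTaskDirectoriesOfNonOwnedTasks;
    }

    public static void main(final String[] args) {
        final Set<String> locked = new HashSet<>(Set.of("0_0", "0_1"));
        System.out.println(nonOwnedTasks(locked, Set.of("0_0")));  // [0_1]
    }
}
```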
```java
// Closed and uninitialized tasks don't have any offsets so we should read directly from the checkpoint
if (task != null && task.state() != State.CREATED && task.state() != State.CLOSED) {
```

```java
final Map<TaskId, Task> tasks = allTasks();
final Set<TaskId> lockedTaskDirectoriesOfNonOwnedTasksAndClosedAndCreatedTasks =
```
Ah, I recommended this change thinking that `lockedTaskDirectories` always includes all created and closed tasks -- I think it does, right? So it should be enough to assign this to `lockedTaskDirectories`.
I do not think there is a guarantee that `lockedTaskDirectories` contains the tasks the client owns. `lockedTaskDirectories` are just the non-empty task directories in the state directory when a rebalance starts. However, a task directory is created when a task is created, i.e., when it is in state `CREATED`. A task directory is not deleted when a task is closed, i.e., in state `CLOSED`. This might be a correlation and not a thought-out invariant. At least, the original code did not rely on it, since it used `union(HashSet::new, lockedTaskDirectories, tasks.allTaskIds())`.

I am also somewhat reluctant to rely on such a -- IMO -- brittle invariant. As an example, in the future we could decide to move the creation of the task directory to other parts of the code -- like when the task is initialized -- which would mean that there is an interval in which the task is in state `CREATED` but does not have a task directory.
Well, if there is no task directory, there is no checkpoint to process. So it's safe to not do anything in this case. All you'd do by adding more tasks is to later skip on the `checkPointFile.exists()` check.
What could make the simplified code break is that we decide to not release the lock before transitioning to the `CLOSED` state.

So yeah, being defensive here and also going through all `CREATED` and `CLOSED` tasks, to make sure that they do not have state directories that are locked but not inside `lockedTaskDirectories`, sounds good to me as well.
OK, I agree with you!
Let's be defensive then!
```diff
 for (final TaskDirectory taskDir : stateDirectory.listNonEmptyTaskDirectories()) {
     final File dir = taskDir.file();
     final String namedTopology = taskDir.namedTopology();
     try {
         final TaskId id = parseTaskDirectoryName(dir.getName(), namedTopology);
         if (stateDirectory.lock(id)) {
             lockedTaskDirectories.add(id);
-            if (!tasks.contains(id)) {
+            if (!allTasks.containsKey(id)) {
```
For this debug log, we only considered tasks owned by the stream thread.
```diff
@@ -1141,25 +1141,30 @@ public Map<TaskId, Long> getTaskOffsetSums() {
         // Not all tasks will create directories, and there may be directories for tasks we don't currently own,
         // so we consider all tasks that are either owned or on disk. This includes stateless tasks, which should
         // just have an empty changelogOffsets map.
         for (final TaskId id : union(HashSet::new, lockedTaskDirectories, tasks.allTaskIds())) {
```
It seems with the state updater enabled, `tasks` actually only contains "running tasks". It seems appropriate to rename this variable to `runningTasks` (can also happen in a follow-up PR).

I am actually also wondering if we still need this `Tasks` container to begin with? The purpose of the `Tasks` container was to simplify the `TaskManager` that manages both active and standby tasks. With the state updater (from my understanding), the `TaskManager` only manages active tasks, while standby tasks will be owned by the state updater thread (would it still be useful for the state updater thread to use the `Tasks` container, given that it also owns active tasks as long as they are restoring?)
> It seems with the state updater enabled, `tasks` actually only contains "running tasks". It seems appropriate to rename this variable to `runningTasks` (can also happen in a follow-up PR).

The old code path with the state updater disabled does still exist, and we can disable the state updater if we encounter a major bug after releasing. So, I would postpone such renamings to the removal of the old code path.

> I am actually also wondering if we still need this `Tasks` container to begin with?

I would keep it, because it allows us to cleanly set a specific state of the task manager in unit tests. Anyway, I would wait for the upcoming thread refactoring to make such changes.

> would it still be useful for the state updater thread to use the `Tasks` container, given that it also owns active tasks as long as they are restoring?

I do not think so, since access by the state updater would imply that the tasks registry (aka the tasks container) needs to be accessed concurrently. For this reason, we defined an invariant that a task can only be owned either by the stream thread or by the state updater, but not both. Sharing the tasks registry between the stream thread and the state updater would break that invariant. If you meant to use a separate instance of the tasks registry for the state updater, that would not be useful IMO.
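The exclusive-ownership invariant can be illustrated with a minimal sketch (all names here are hypothetical, not Kafka Streams code): a task id is registered with exactly one owner at a time, and handing a task to the state updater transfers ownership, so neither side ever needs concurrent access to a shared registry.

```java
import java.util.HashMap;
import java.util.Map;

public class OwnershipSketch {
    enum Owner { STREAM_THREAD, STATE_UPDATER }

    private final Map<String, Owner> owner = new HashMap<>();

    void addToStreamThread(final String taskId) {
        owner.put(taskId, Owner.STREAM_THREAD);
    }

    // Hand-off replaces ownership; the task is never owned by both at once.
    void handOffToStateUpdater(final String taskId) {
        if (owner.get(taskId) != Owner.STREAM_THREAD) {
            throw new IllegalStateException("task not owned by stream thread: " + taskId);
        }
        owner.put(taskId, Owner.STATE_UPDATER);
    }

    public static void main(final String[] args) {
        final OwnershipSketch sketch = new OwnershipSketch();
        sketch.addToStreamThread("0_0");
        sketch.handOffToStateUpdater("0_0");
        System.out.println(sketch.owner.get("0_0"));  // STATE_UPDATER
    }
}
```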
LGTM
Build failures are unrelated: